Results 1 - 20 of 174
1.
Nat Hum Behav ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38740990

ABSTRACT

The spread of misinformation through media and social networks threatens many aspects of society, including public health and the state of democracies. One approach to mitigating the effect of misinformation focuses on individual-level interventions, equipping policymakers and the public with essential tools to curb the spread and influence of falsehoods. Here we introduce a toolbox of individual-level interventions for reducing harm from online misinformation. Comprising an up-to-date account of interventions featured in 81 scientific papers from across the globe, the toolbox provides both a conceptual overview of nine main types of interventions, including their target, scope and examples, and a summary of the empirical evidence supporting the interventions, including the methods and experimental paradigms used to test them. The nine types of interventions covered are accuracy prompts, debunking and rebuttals, friction, inoculation, lateral reading and verification strategies, media-literacy tips, social norms, source-credibility labels, and warning and fact-checking labels.

2.
Trends Cogn Sci ; 28(5): 383-385, 2024 May.
Article in English | MEDLINE | ID: mdl-38575465

ABSTRACT

This article introduces a theoretical model of truth and honesty from a psychological perspective. We examine its application in political discourse and discuss empirical findings distinguishing between conceptions of honesty and their influence on public perception, misinformation dissemination, and the integrity of democracy.


Subject(s)
Deception , Humans , Politics , Democracy , Models, Psychological
3.
Health Psychol ; 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38436659

ABSTRACT

OBJECTIVE: We introduce and report early stage testing of a novel, multicomponent intervention that can be used by healthcare professionals (HCPs) to address false or misleading antivaccination arguments while maintaining empathy for and understanding of people's motivations to believe misinformation: the "Empathetic Refutational Interview" (ERI). METHOD: We conducted four experiments in 2022 with participants who were predominantly negative or on the fence about vaccination (total n = 2,545) to test four steps for tailoring an HCP's response to a vaccine-hesitant individual: (a) elicit their concerns, (b) affirm their values and beliefs to the extent possible, (c) refute the misinformed beliefs in their reasoning in a way that is tailored to their psychological motivations, and (d) provide factual information about vaccines. Each of the steps was tested against active control conditions, with participants randomized to conditions. RESULTS: Overall, compared to controls, we found that observing steps of the ERI produced small effects on increasing vaccine acceptance and lowering support for antivaccination arguments. Critically, an HCP who affirmed participants' concerns generated significantly more support for their refutations and subsequent information, with large effects compared to controls. In addition, participants found tailored refutations (compared to control responses) more compelling, and displayed more trust and openness toward the HCP giving them. CONCLUSIONS: The ERI can potentially be leveraged and tested further as a tailored communication tool for HCPs to refute antivaccination misconceptions while maintaining trust and rapport with patients.

4.
Health Commun ; : 1-9, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38450609

ABSTRACT

Research has found that vaccine-promoting messages can elicit state reactance (i.e., negative emotions in response to a perceived threat to behavioral freedom), especially among individuals with high trait reactance (i.e., proneness to experiencing reactance). This can result in a lower willingness to accept vaccines. We investigated whether inoculation against reactance - that is, forewarning individuals about potentially experiencing reactance - can reduce the effects of trait reactance on vaccination willingness. Participants (N = 710) recruited through Facebook were randomly allocated to be either inoculated or not. They were then shown a message promoting a fictitious vaccine, which included either a low, medium, or high threat to freedom. Contrary to research on other health topics, inoculation was ineffective at reducing state reactance toward the vaccination message. Inoculation also did not mitigate the effects of trait reactance on vaccination willingness, and was even counterproductive in some cases. High-reactant individuals were less willing to get vaccinated than low-reactant ones, especially at high freedom threat. Conversely, high freedom threat resulted in increased vaccination willingness among low-reactant individuals. Further research is needed to understand why inoculation against reactance produces different results with vaccination, and to develop communication strategies that mitigate reactance to vaccination campaigns without compromising the positive effects of vaccine recommendations for low-reactant individuals.

5.
PNAS Nexus ; 3(2): pgae035, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38328785

ABSTRACT

The increasing availability of microtargeted advertising and the accessibility of generative artificial intelligence (AI) tools, such as ChatGPT, have raised concerns about the potential misuse of large language models in scaling microtargeting efforts for political purposes. Recent technological advancements, involving generative AI and personality inference from consumed text, can potentially create a highly scalable "manipulation machine" that targets individuals based on their unique vulnerabilities without requiring human input. This paper presents four studies examining the effectiveness of this putative "manipulation machine." The results demonstrate that personalized political ads tailored to individuals' personalities are more effective than nonpersonalized ads (studies 1a and 1b). Additionally, we showcase the feasibility of automatically generating and validating these personalized ads on a large scale (studies 2a and 2b). These findings highlight the potential risks of utilizing AI and microtargeting to craft political messages that resonate with individuals based on their personality traits. This should be an area of concern to ethicists and policy makers.

6.
Proc Conf Assoc Comput Linguist Meet ; 2023: 2339-2349, 2023 May.
Article in English | MEDLINE | ID: mdl-37997575

ABSTRACT

The dissemination of false information on the internet has received considerable attention over the last decade. Misinformation often spreads faster than mainstream news, thus making manual fact-checking inefficient or, at best, labor-intensive. Therefore, there is an increasing need to develop methods for the automatic detection of misinformation. Although resources for creating such methods are available in English, other languages are often underrepresented in this effort. With this contribution, we present IRMA, a corpus containing over 600,000 Italian news articles (335+ million tokens) collected from 56 websites classified as 'untrustworthy' by professional fact-checkers. The corpus is freely available and comprises a rich set of text- and website-level data, representing a turnkey resource for testing hypotheses and developing automatic detection algorithms. It contains texts, titles, and dates (from 2004 to 2022), along with three types of semantic measures (i.e., keywords, topics at three different resolutions, and LIWC lexical features). IRMA also includes domain-specific information such as source type (e.g., political, health, conspiracy, etc.), quality, and higher-level metadata, including several metrics of website incoming traffic that allow researchers to investigate online user behavior. IRMA constitutes the largest corpus of misinformation available in Italian today, making it a valid tool for advancing quantitative research on untrustworthy news detection and ultimately helping limit the spread of misinformation.
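A corpus of this shape lends itself to straightforward filtering and aggregation. The sketch below uses a tiny hand-made table to show the kind of query the abstract describes; the column names (`title`, `date`, `source_type`, `n_tokens`) are assumptions for illustration, not IRMA's documented schema:

```python
import pandas as pd

# Hypothetical slice of an IRMA-like corpus; columns and values are
# illustrative assumptions, not the corpus's actual schema.
articles = pd.DataFrame({
    "title": ["A", "B", "C", "D"],
    "date": pd.to_datetime(["2004-06-01", "2012-03-15", "2020-11-02", "2022-01-20"]),
    "source_type": ["political", "health", "conspiracy", "health"],
    "n_tokens": [450, 1200, 800, 600],
})

# Select health-related articles from 2012 onward and total their tokens.
subset = articles[(articles["source_type"] == "health") &
                  (articles["date"] >= "2012-01-01")]
print(len(subset), subset["n_tokens"].sum())  # → 2 1800
```

The same boolean-indexing pattern scales to the full 600,000-article corpus, e.g. for per-source-type token counts via `groupby`.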

7.
Article in English | MEDLINE | ID: mdl-37942873

ABSTRACT

Anti-science attitudes can be resilient to scientific evidence if they are rooted in psychological motives. One such motive is trait reactance, which refers to the need to react with opposition when one's freedom of choice has been threatened. In three studies, we investigated trait reactance as a psychological motivation to reject vaccination. In the longitudinal studies (n = 199; 293), we examined if trait reactance measured before the COVID-19 pandemic was related to people's willingness to get vaccinated against COVID-19 up to 2 years later during the pandemic. In the experimental study (n = 398), we tested whether trait reactance makes anti-vaccination attitudes more resistant to information and whether this resistance can be mitigated by framing the information to minimize the risk of triggering state reactance. The longitudinal studies showed that higher trait reactance before the COVID-19 pandemic was related to lower willingness to get vaccinated against COVID-19. Our experimental study indicated that highly reactant individuals' willingness to vaccinate was unaffected by the amount and framing of the information provided. Trait reactance has a strong and durable impact on vaccination willingness. This highlights the importance of considering the role of trait reactance in people's vaccination-related decision-making.

8.
Curr Opin Psychol ; 54: 101711, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37944324

ABSTRACT

Democracy relies on a shared body of knowledge among citizens, for example trust in elections and reliable knowledge to inform policy-relevant debate. We review the evidence for widespread disinformation campaigns that are undermining this shared knowledge. We identify a common pattern by which science and scientists are discredited, and show how the most recent frontier of those attacks targets researchers who study misinformation itself. We list several ways in which psychology can contribute to countermeasures.


Subject(s)
Communication , Democracy , Humans , Politics
9.
Eur Psychol ; 28(3): a000493, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37994309

ABSTRACT

The spread of false and misleading information in online social networks is a global problem in need of urgent solutions. It is also a policy problem because misinformation can harm both the public and democracies. To address the spread of misinformation, policymakers require a successful interface between science and policy, as well as a range of evidence-based solutions that respect fundamental rights while efficiently mitigating the harms of misinformation online. In this article, we discuss how regulatory and nonregulatory instruments can be informed by scientific research and used to reach EU policy objectives. First, we consider what it means to approach misinformation as a policy problem. We then outline four building blocks for cooperation between scientists and policymakers who wish to address the problem of misinformation: understanding the misinformation problem, understanding the psychological drivers and public perceptions of misinformation, finding evidence-based solutions, and co-developing appropriate policy measures. Finally, through the lens of psychological science, we examine policy instruments that have been proposed in the EU, focusing on the strengthened Code of Practice on Disinformation 2022.

10.
Curr Dir Psychol Sci ; 32(1): 81-88, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37994317

ABSTRACT

Low-quality and misleading information online can hijack people's attention, often by evoking curiosity, outrage, or anger. Resisting certain types of information and actors online requires people to adopt new mental habits that help them avoid being tempted by attention-grabbing and potentially harmful content. We argue that digital information literacy must include the competence of critical ignoring-choosing what to ignore and where to invest one's limited attentional capacities. We review three types of cognitive strategies for implementing critical ignoring: self-nudging, in which one ignores temptations by removing them from one's digital environments; lateral reading, in which one vets information by leaving the source and verifying its credibility elsewhere online; and the do-not-feed-the-trolls heuristic, which advises one to not reward malicious actors with attention. We argue that these strategies implementing critical ignoring should be part of school curricula on digital information literacy. Teaching the competence of critical ignoring requires a paradigm shift in educators' thinking, from a sole focus on the power and promise of paying close attention to an additional emphasis on the power of ignoring. Encouraging students and other online users to embrace critical ignoring can empower them to shield themselves from the excesses, traps, and information disorders of today's attention economy.

11.
Sci Commun ; 45(4): 539-554, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37994373

ABSTRACT

Effective science communication is challenging when scientific messages are informed by a continually updating evidence base and must often compete against misinformation. We argue that we need a new program of science communication as collective intelligence-a collaborative approach, supported by technology. This would have four key advantages over the typical model where scientists communicate as individuals: scientific messages would be informed by (a) a wider base of aggregated knowledge, (b) contributions from a diverse scientific community, (c) participatory input from stakeholders, and (d) better responsiveness to ongoing changes in the state of knowledge.

12.
J Appl Res Mem Cogn ; 12(3): 325-334, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37829768

ABSTRACT

Corrected misinformation can continue to influence inferential reasoning. It has been suggested that such continued influence is partially driven by misinformation familiarity, and that corrections should therefore avoid repeating misinformation to avoid inadvertent strengthening of misconceptions. However, evidence for such familiarity-backfire effects is scarce. We tested whether familiarity backfire may occur if corrections are processed under cognitive load. Although misinformation repetition may boost familiarity, load may impede integration of the correction, reducing its effectiveness and therefore allowing a backfire effect to emerge. Participants listened to corrections that repeated misinformation while in a driving simulator. Misinformation familiarity was manipulated through the number of corrections. Load was manipulated through a math task administered selectively during correction encoding. Multiple corrections were more effective than a single correction; cognitive load reduced correction effectiveness, with a single correction entirely ineffective under load. This provides further evidence against familiarity-backfire effects and has implications for real-world debunking.

13.
Nat Hum Behav ; 7(12): 2140-2151, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37749196

ABSTRACT

The spread of online misinformation on social media is increasingly perceived as a problem for societal cohesion and democracy. The role of political leaders in this process has attracted less research attention, even though politicians who 'speak their mind' are perceived by segments of the public as authentic and honest even if their statements are unsupported by evidence. By analysing communications by members of the US Congress on Twitter between 2011 and 2022, we show that politicians' conception of honesty has undergone a distinct shift, with authentic belief speaking that may be decoupled from evidence becoming more prominent and more differentiated from explicitly evidence-based fact speaking. We show that for Republicans, but not Democrats, a 10% increase in belief speaking is associated with a 12.8-point decrease in the quality (per the NewsGuard scoring system) of the sources shared in a tweet. In contrast, an increase in fact-speaking language is associated with an increase in source quality for both parties. Our study is observational and cannot support causal inferences. However, our results are consistent with the hypothesis that the current dissemination of misinformation in political discourse is linked to an alternative understanding of truth and honesty that emphasizes invocation of subjective belief at the expense of reliance on evidence.


Subject(s)
Communication , Language , Humans
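The association reported above (a quality change per unit of belief-speaking language) is a simple slope from a regression. The sketch below fits such a regression on simulated data whose slope is seeded to echo the reported effect size; none of these numbers are the study's actual data:

```python
import numpy as np

# Illustrative-only data: share of belief-speaking language in a tweet (%)
# vs. a NewsGuard-style source-quality score (0-100). The true slope is
# seeded at -1.28 quality points per 1% belief speaking, i.e. -12.8 per
# 10%, to mirror the reported association.
rng = np.random.default_rng(1)
belief = rng.uniform(0, 30, 500)
quality = 90 - 1.28 * belief + rng.normal(0, 5, 500)

# Ordinary least squares fit: quality ~ intercept + slope * belief.
slope, intercept = np.polyfit(belief, quality, 1)
print(round(slope * 10, 1))  # estimated quality change per 10% belief increase
```

With n = 500 and modest noise, the fitted slope lands close to the seeded value; the actual study additionally controls for party and uses observational tweet data.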
14.
PNAS Nexus ; 2(9): pgad286, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37719749

ABSTRACT

One widely used approach for quantifying misinformation consumption and sharing is to evaluate the quality of the news domains that a user interacts with. However, different media organizations and fact-checkers have produced different sets of news domain quality ratings, raising questions about the reliability of these ratings. In this study, we compared six sets of expert ratings and found that they generally correlated highly with one another. We then created a comprehensive set of domain ratings for use by the research community (github.com/hauselin/domain-quality-ratings), leveraging an ensemble "wisdom of experts" approach. To do so, we performed imputation together with principal component analysis to generate a set of aggregate ratings. The resulting rating set comprises 11,520 domains-the most extensive coverage to date-and correlates well with other rating sets that have more limited coverage. Together, these results suggest that experts generally agree on the relative quality of news domains, and the aggregate ratings that we generate offer a powerful research tool for evaluating the quality of news consumed or shared and the efficacy of misinformation interventions.
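The ensemble "wisdom of experts" step (imputation followed by principal component analysis) can be sketched on synthetic ratings. This is a simplified stand-in, using column-mean imputation and the first principal component, and is not claimed to be the authors' exact pipeline:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical ratings: rows = news domains, columns = six expert rating
# sets. NaN marks domains a given rater did not cover.
rng = np.random.default_rng(0)
quality = rng.uniform(0, 100, size=(200, 1))          # latent "true" quality
ratings = quality + rng.normal(0, 10, size=(200, 6))  # six noisy expert views
ratings[rng.random(ratings.shape) < 0.3] = np.nan     # ~30% missing coverage

# Step 1: impute missing ratings (column-mean imputation as a simple proxy).
imputed = SimpleImputer(strategy="mean").fit_transform(ratings)

# Step 2: standardise, then take the first principal component as the
# aggregate rating.
scaled = StandardScaler().fit_transform(imputed)
pc1 = PCA(n_components=1).fit_transform(scaled).ravel()

# A principal component's sign is arbitrary; orient it so higher = better.
if np.corrcoef(pc1, imputed.mean(axis=1))[0, 1] < 0:
    pc1 = -pc1

# The aggregate score recovers the latent quality well despite missingness.
print(round(np.corrcoef(pc1, quality.ravel())[0, 1], 2))
```

Because the six raters here agree up to noise, PC1 behaves like a de-noised average, which is the intuition behind pooling correlated expert rating sets.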

15.
Hum Vaccin Immunother ; 19(2): 2256442, 2023 08.
Article in English | MEDLINE | ID: mdl-37724556

ABSTRACT

Mandatory vaccinations are widely debated since they restrict individuals' autonomy in their health decisions. As healthcare professionals (HCPs) are a common target group of vaccine mandates, and also form a link between vaccination policies and the public, understanding their attitudes toward vaccine mandates is important. The present study investigated physicians' attitudes to COVID-19 vaccine mandates in four European countries: Finland, France, Germany, and Portugal. An electronic survey assessing attitudes to COVID-19 vaccine mandates and general vaccination attitudes (e.g. perceived vaccine safety, trust in health authorities, and openness to patients) was sent to physicians in the spring of 2022. A total of 2796 physicians responded. Across all countries, 78% of the physicians were in favor of COVID-19 vaccine mandates for HCPs, 49% favored COVID-19 vaccine mandates for the public, and 67% endorsed COVID-19 health passes. Notable differences were observed between countries, with attitudes to mandates found to be more positive in countries where the mandate, or similar mandates, were in effect. The associations between attitudes to mandates and general vaccination attitudes were mostly small to neglectable and differed between countries. Nevertheless, physicians with more positive mandate attitudes perceived vaccines as more beneficial (in Finland and France) and had greater trust in medical authorities (in France and Germany). The present study contributes to the body of research within social and behavioral sciences that support evidence-based vaccination policymaking.


Subject(s)
COVID-19 Vaccines , COVID-19 , Humans , Cross-Sectional Studies , COVID-19/prevention & control , Attitude of Health Personnel , Vaccination
16.
Hum Vaccin Immunother ; 19(2): 2242748, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37581343

ABSTRACT

Vaccine hesitancy has become a threat to public health, especially as it is a phenomenon that has also been observed among healthcare professionals. In this study, we analyzed the relationship between endorsement of complementary and alternative medicine (CAM) and vaccination attitudes and behaviors among healthcare professionals, using a cross-sectional sample of physicians with vaccination responsibilities from four European countries: Germany, Finland, Portugal, and France (total N = 2,787). Our results suggest that, in all the participating countries, CAM endorsement is associated with lower frequency of vaccine recommendation, lower self-vaccination rates, and being more open to patients delaying vaccination, with these relationships being mediated by distrust in vaccines. A latent profile analysis revealed that a profile characterized by higher-than-average CAM endorsement and lower-than-average confidence in and recommendation of vaccines occurs, to some degree, among 19% of the total sample, although this percentage varied from one country to another: 23.72% in Germany, 17.83% in France, 9.77% in Finland, and 5.86% in Portugal. These results constitute a call to consider healthcare professionals' attitudes toward CAM as a factor that could hinder the implementation of immunization campaigns.


Subject(s)
Complementary Therapies , Physicians , Vaccines , Humans , Cross-Sectional Studies , Vaccination Hesitancy , Health Knowledge, Attitudes, Practice , Vaccination
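Latent profile analysis of the kind used above is commonly approximated with a Gaussian mixture model over the continuous attitude measures. The sketch below recovers a minority "high CAM, low vaccine confidence" profile from simulated data; all values are illustrative and not the study's:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated physicians measured on (CAM endorsement, vaccine confidence),
# both on a 1-5 scale. Two latent profiles are planted: a majority with
# low CAM / high confidence and a minority with the reverse pattern.
rng = np.random.default_rng(42)
majority = rng.normal([2.0, 4.5], 0.4, size=(400, 2))  # low CAM, high confidence
minority = rng.normal([4.0, 2.5], 0.4, size=(100, 2))  # high CAM, low confidence
X = np.vstack([majority, minority])

# Fit a two-profile Gaussian mixture; in practice the number of profiles
# is chosen by comparing BIC across candidate models.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Share of the sample assigned to the smaller (high-CAM) profile.
share = min(np.bincount(labels)) / len(X)
print(round(share, 2))  # → 0.2
```

With well-separated profiles the mixture recovers the planted 20% minority; real attitude data overlap more, which is why profile membership is probabilistic in LPA.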
17.
Expert Rev Vaccines ; 22(1): 726-737, 2023.
Article in English | MEDLINE | ID: mdl-37507356

ABSTRACT

BACKGROUND: Healthcare professionals (HCPs) play an important role in vaccination; those with low confidence in vaccines are less likely to recommend them to their patients and to be vaccinated themselves. The study's purpose was to adapt and validate long- and short-form versions of the International Professionals' Vaccine Confidence and Behaviors (I-Pro-VC-Be) questionnaire to measure psychosocial determinants of HCPs' vaccine confidence and their associations with vaccination behaviors in European countries. RESEARCH DESIGN AND METHODS: After the original French-language Pro-VC-Be was culturally adapted and translated, HCPs involved in vaccination (mainly GPs and pediatricians) across Germany, Finland, France, and Portugal completed a cross-sectional online survey in 2022. A 10-factor multigroup confirmatory factor analysis (MG-CFA) of the long-form (10 factors comprising 34 items) tested for measurement invariance across countries. Modified multiple Poisson regressions tested the criterion validity of both versions. RESULTS: 2,748 HCPs participated. The 10-factor structure fit was acceptable to good in every country. The final MG-CFA model confirmed strong factorial invariance and showed very good fit. The long- and short-form I-Pro-VC-Be had good criterion validity with vaccination behaviors. CONCLUSION: This study validates the I-Pro-VC-Be among HCPs in four European countries, providing long- and short-form tools for use in research and public health.


Subject(s)
Vaccines , Humans , Cross-Sectional Studies , Vaccination , Europe , Surveys and Questionnaires , Delivery of Health Care
18.
Perspect Psychol Sci ; : 17456916231180809, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37427579

ABSTRACT

Most content consumed online is curated by proprietary algorithms deployed by social media platforms and search engines. In this article, we explore the interplay between these algorithms and human agency. Specifically, we consider the extent of entanglement or coupling between humans and algorithms along a continuum from implicit to explicit demand. We emphasize that the interactions people have with algorithms not only shape users' experiences in that moment but because of the mutually shaping nature of such systems can also have longer-term effects through modifications of the underlying social-network structure. Understanding these mutually shaping systems is challenging given that researchers presently lack access to relevant platform data. We argue that increased transparency, more data sharing, and greater protections for external researchers examining the algorithms are required to help researchers better understand the entanglement between humans and algorithms. This better understanding is essential to support the development of algorithms with greater benefits and fewer risks to the public.

19.
Sci Rep ; 13(1): 11219, 2023 07 17.
Article in English | MEDLINE | ID: mdl-37460585

ABSTRACT

The proliferation of anti-vaccination arguments online can threaten immunisation programmes, including those targeting COVID-19. To effectively refute misinformed views about vaccination, communicators need to go beyond providing correct information and debunking of misconceptions, and must consider the underlying motivations of people who hold contrarian views. Drawing on a taxonomy of anti-vaccination arguments that identified 11 "attitude roots"-i.e., psychological attributes-that motivate an individual's vaccine-hesitant attitude, we assessed whether these attitude roots were identifiable in argument endorsements and responses to psychological construct measures corresponding to the presumed attitude roots. In two UK samples (total n = 1250), we found that participants exhibited monological belief patterns in their highly correlated endorsements of anti-vaccination arguments drawn from different attitude roots, and that psychological constructs representing the attitude roots significantly predicted argument endorsement strength and vaccine hesitancy. We identified four different latent anti-vaccination profiles amongst our participants' responses. We conclude that endorsement of anti-vaccination arguments meaningfully dovetails with attitude roots clustering around anti-scientific beliefs and partisan ideologies, but that the balance between those attitudes differs considerably between people. Communicators must be aware of those individual differences.


Subject(s)
COVID-19 , Humans , COVID-19/prevention & control , Vaccination/psychology , Attitude , Vaccination Hesitancy , Motivation
20.
Nat Hum Behav ; 7(9): 1462-1480, 2023 09.
Article in English | MEDLINE | ID: mdl-37460761

ABSTRACT

The proliferation of anti-vaccination arguments is a threat to the success of many immunization programmes. Effective rebuttal of contrarian arguments requires an approach that goes beyond addressing flaws in the arguments, by also considering the attitude roots-that is, the underlying psychological attributes driving a person's belief-of opposition to vaccines. Here, through a pre-registered systematic literature review of 152 scientific articles and thematic analysis of anti-vaccination arguments, we developed a hierarchical taxonomy that relates common arguments and themes to 11 attitude roots that explain why an individual might express opposition to vaccination. We further validated our taxonomy on coronavirus disease 2019 anti-vaccination misinformation, through a combination of human coding and machine learning using natural language processing algorithms. Overall, the taxonomy serves as a theoretical framework to link expressed opposition of vaccines to their underlying psychological processes. This enables future work to develop targeted rebuttals and other interventions that address the underlying motives of anti-vaccination arguments.


Subject(s)
COVID-19 , Vaccines , Humans , COVID-19/prevention & control , Vaccination/psychology , Dissent and Disputes , Communication